MRI and PET Coregistration: A Crossvalidation of SPM and AIR

Running title: MRI and PET Coregistration: A Crossvalidation

Authors

  • Stefan J. Kiebel
  • John Ashburner
  • Jean-Baptiste Poline
  • Karl J. Friston
Abstract

Coregistration of functional PET and T1-weighted MR images is a necessary step for combining functional information from PET images with anatomical information in MR images. Several coregistration algorithms have been published and are used in functional brain imaging studies. In this paper, we present a comparison and crossvalidation of the two most widely used coregistration routines (Friston et al., 1995; Woods et al., 1993). Several transformations were applied to high-resolution anatomical MR images to generate simulated PET images so that the exact (rigid body) transformations between each MR image and its associated simulated PET images were known. The estimation error of a coregistration in relation to the known transformation allows a comparison of the performance of different coregistration routines. Under the assumption that the simulated PET images embody the salient features of real PET images with respect to coregistration, this study shows that the routines examined reliably solve the MRI to PET coregistration problem.

Introduction

In neuroimaging, coregistration of functional PET and anatomical MR images is essential for combining information from both image modalities. The benefit of having coregistered images is that it enables the visualization of functional PET data by superimposing it on a high-resolution anatomical MR image and therefore improves localization of neural activation loci. Several algorithms have been proposed by various authors (Andersson et al., 1995; Ardekani et al., 1995; Friston et al., 1995; Mangin et al., 1994; Woods et al., 1993). We chose two widely used coregistration algorithms (Woods et al., 1993; Friston et al., 1995) to crossvalidate on the same test data set. The first routine is contained in the Automated Image Registration package (AIR, version 2.03) and the second is part of the Statistical Parametric Mapping package (SPM95).
This crossvalidation includes an empirical validation of the SPM routine, which had not been undertaken before, and facilitates a comparison of the performance of both routines. To generate a suitable PET data set, we used simulated PET images, which were generated by applying a series of transformations to some MR images. Rigid body transformations applied to the simulated PET images were used to simulate misregistrations between the PET and MR data. After running both coregistration routines on the MR and simulated PET images, it was possible to assess and compare the errors associated with them.

Methods

We describe the basic concepts of MRI and PET coregistration, followed by an overview and theoretical comparison of the AIR and SPM algorithms. We also describe the crossvalidation procedure, particularly the generation of simulated PET images.

Basic Concepts

Generally, an MRI to PET coregistration algorithm transforms an MR image M in such a way that each voxel in the transformed MR image M' and in the PET image P correspond to the same voxel in real brain space. Since images of both modalities are acquired from the same subject, a rigid body transformation T maps M to M'. We assume that the scanning processes have not introduced non-rigid spatial distortions into either the PET or MR image, or that such distortions are small. We also assume that no spatial scaling of the data is introduced by inaccurate voxel sizes. The required transformation can then be described by translations and rotations in three dimensions, so that any coregistration algorithm computes six parameters: x-, y- and z-translation, pitch, roll and yaw. The main challenge in MRI and PET coregistration is that, even if both images are perfectly coregistered, there is no 'simple' function which maps voxel intensities from the MR image uniquely to the PET image.
For instance, scalp and white matter produce roughly the same voxel intensities within an MR image, but are associated with different voxel values within a PET image. Therefore, coregistration based only on the voxel intensities of both images is difficult, if not impossible. To address this issue, the images can be preprocessed to eschew this problem of non-unique mapping. An example is the often used (manual) scalp-editing of the MR image prior to coregistration. This sort of preprocessing renders the coregistration problem solvable; however, it is desirable to automate this preprocessing task and incorporate it into a coregistration routine, since manual scalp-editing can be laborious for the user. The SPM algorithm is an example of this non-supervised class of algorithm.

Since we will use the standard (4 x 4)-matrix format throughout the text to represent a transformation T, this matrix format is described in the following. A (4 x 4)-transformation matrix, which we also interchangeably denote by T, is given by

T = \begin{pmatrix} t_{11} & t_{12} & t_{13} & t_{14} \\ t_{21} & t_{22} & t_{23} & t_{24} \\ t_{31} & t_{32} & t_{33} & t_{34} \\ 0 & 0 & 0 & 1 \end{pmatrix}   (1)

where the elements of the first three rows of T are chosen such that

\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ 1 \end{pmatrix} = T \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ 1 \end{pmatrix}   (2)

where x = (x_1, x_2, x_3)' is a voxel position in the original image M and y = (y_1, y_2, y_3)' is the corresponding voxel position in M' after transformation with T. For instance, the matrix

T_t = \begin{pmatrix} 1 & 0 & 0 & t_{14} \\ 0 & 1 & 0 & t_{24} \\ 0 & 0 & 1 & t_{34} \\ 0 & 0 & 0 & 1 \end{pmatrix}   (3)

would represent a three-dimensional translation by t_{14} in x-, t_{24} in y- and t_{34} in z-direction (voxels).
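The homogeneous-coordinate convention of equations 1 to 3 can be illustrated with a few lines of numpy. This is a sketch of the notation only; the function name is our own, not part of SPM or AIR.

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """(4 x 4) homogeneous translation matrix T_t of equation 3."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# Equation 2: map a voxel position x to y = T x in homogeneous coordinates.
T = translation_matrix(2.0, -1.0, 3.0)
x = np.array([10.0, 20.0, 30.0, 1.0])   # (x1, x2, x3, 1)'
y = T @ x                               # -> (12., 19., 33., 1.)

# The matrix inverse of T is also the inverse of the transformation.
assert np.allclose(np.linalg.inv(T) @ y, x)
```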
The rotations r_x, r_y, r_z (radians) around the three axes (pitch, roll and yaw) are similarly coded by

T_{r_x} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos(r_x) & \sin(r_x) & 0 \\ 0 & -\sin(r_x) & \cos(r_x) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}   (4)

T_{r_y} = \begin{pmatrix} \cos(r_y) & 0 & \sin(r_y) & 0 \\ 0 & 1 & 0 & 0 \\ -\sin(r_y) & 0 & \cos(r_y) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}   (5)

T_{r_z} = \begin{pmatrix} \cos(r_z) & \sin(r_z) & 0 & 0 \\ -\sin(r_z) & \cos(r_z) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}   (6)

The advantage of this matrix representation is that several sequentially applied transformations T_i, i = 1, ..., Q can be expressed by a single transformation matrix T:

T = \prod_{i=1}^{Q} T_i   (7)

(Note that T_1 is not the first, but the last applied transformation.) In this way, we can represent a rigid body transformation with six parameters by one transformation matrix. Another important property of these matrices is that the inverse of the matrix T (denoted by T^{-1}) is also the inverse of the transformation T. SPM and AIR basically use the same format except for the order of rotations, which does matter, as one can see from equations 4 to 6. In SPM, a rigid body transformation matrix is given by T = T_t T_{r_x} T_{r_y} T_{r_z} and in AIR it is T = T_t T_{r_y} T_{r_x} T_{r_z}.

The AIR-algorithm

We used the AIR 2.03 routine with the cost function based upon the standard deviation of the 'ratio image' for crossvalidation. Although version 2.03 of the AIR package also offers a second routine for coregistration, we chose the first routine, since it had been described in (Woods et al., 1993) and already crossvalidated against other coregistration routines (Strother et al., 1994). The approach taken by the AIR algorithm is to define a cost function measuring the MRI and PET misregistration. A transformation matrix is then found by iterative, nonlinear minimization of this cost function. The AIR algorithm is based on the key assumption that similar voxel intensities in the MR image correspond to the same tissue type. We refer to this assumption as the 'similarity assumption' in the remainder of the text.
The similarity assumption is roughly valid if the T1-weighted MR image is scalp-edited, i.e. voxels not corresponding to brain tissue are set to zero. This scalp-editing has to be done prior to coregistration. Assuming that the similarity assumption holds not only for MR images but also for PET images, the idea is to find a rigid body transformation T so that all MR voxels with the same intensity value are mapped to a set of similar PET voxel intensity values. The similarity of a set of PET voxels can be measured by the ratio of its standard deviation and mean. Described more formally, the algorithm uses a histogram segmentation to segment the MR image into K voxel intensity partitions P_i and minimizes a weighted ratio of the standard deviations and means of the PET voxels corresponding to P_i by manipulating the transformation matrix T. The key formula of the algorithm is

f(PET, MRI, T) = \frac{1}{N} \sum_{j}^{K} n_j \frac{\sigma_j}{a_j}   (8)

where PET is the PET image, MRI is the MR image, T stands for the transformation matrix, n_j is the number of MRI voxels with an intensity of j, N = \sum_{j}^{K} n_j, a_j is the mean of all PET voxel values with a corresponding MRI voxel intensity in partition j, and \sigma_j is their standard deviation. Changing T, for example translating the MRI in relation to the PET image by 1 mm, means changing the value of \sigma_j / a_j for all j. The aim of the algorithm is to find T such that the right hand side of equation 8 is minimized. This computation requires a nonlinear minimization scheme, because the additive components in equation 8 are not independent. The minimization method chosen is Southwell's Relaxation Search (Pierre, 1986), which is an iterative, univariate Newton-Raphson search. The overall minimization is performed iteratively, each step producing a new transformation matrix T, which is subsequently used in the next Newton-Raphson iteration. When the coregistration is good enough, i.e.
the minimization process arrives at a global or local minimum in the parameter space of T, or when a prespecified maximum number of iteration steps is reached, the algorithm will stop and return the computed transformation matrix T.

The SPM-algorithm

The basic strategy of the SPM algorithm is to transform the MR image to an emulated PET image and to map this image to the real PET image. The first step is to segment the MR image into four partitions: grey matter, white matter, cerebrospinal fluid (CSF) and 'everything else'. Because of this preprocessing segmentation it is possible to use either a scalp-edited or a non scalp-edited MR image for coregistration: the algorithm should segment any MRI into its four partitions. In the case of a non scalp-edited MRI, the 'everything else' partition should contain scalp, skull and meninges. The segmentation routine is described in detail elsewhere (Friston et al., 1996) and works as follows: First, the MRI is fitted to an MRI template by means of a 12-parameter affine transformation. For each voxel of the template, there exists a three-element vector specifying the a priori probabilities for grey matter, white matter and CSF. This probability map is used to segment the fitted MRI into its four partitions using an iterative fuzzy clustering scheme. The next step is to generate an emulated PET image by modifying the voxel intensities so that the ratio of the mean of white matter to the mean of grey matter voxels is 1:3 and the corresponding ratio of CSF to grey matter is 1:10. At this point, the remaining task is to find a rigid body transformation T, which coregisters the emulated PET to the real PET image. The method used is to linearize the optimization problem and to iteratively find a solution with linear methods. More precisely, each iteration solves an overdetermined linear equation system in a least-squares sense.
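Before turning to the linearization details, the AIR cost function of equation 8 can be sketched numerically. This is our own minimal illustration, not AIR's implementation: the histogram partitioning and weighting are simplified, and the candidate transformation is assumed to have already been applied so that both images live on the same voxel grid.

```python
import numpy as np

def air_cost(pet, mri, n_partitions=256):
    """Sketch of the AIR cost (equation 8): the weighted sum of sigma_j / a_j
    over MRI intensity partitions, normalized by the total voxel count.
    pet and mri are arrays of equal shape, already resampled into a common
    space by a candidate rigid body transformation."""
    pet = pet.ravel()
    # Partition MRI voxels by intensity (simple equal-width histogram bins).
    bins = np.linspace(mri.min(), mri.max(), n_partitions + 1)
    labels = np.digitize(mri.ravel(), bins[1:-1])
    cost, n_total = 0.0, 0
    for j in range(n_partitions):
        vals = pet[labels == j]
        n_j = vals.size
        if n_j < 2:
            continue                      # skip empty / degenerate partitions
        a_j = vals.mean()
        if a_j != 0:
            cost += n_j * vals.std() / a_j
        n_total += n_j
    return cost / max(n_total, 1)
```

Minimizing this cost over the six rigid body parameters (AIR uses the univariate Newton-Raphson relaxation search described above) yields the transformation: when similar MRI intensities map onto homogeneous PET regions, each sigma_j is small and the cost drops.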
To linearize, two key assumptions are met by smoothing the emulated PET image with a Gaussian filter with 8 mm FWHM (full width at half maximum):

1. The effects of applying a sufficiently small transformation parameter (e.g. t_x) to the emulated PET image can be approximated by some linear function of the intensity values using a first order Taylor series approximation.

2. The effects of applying sufficiently small transformation parameters to the emulated PET image are independent.

These two assumptions are the prerequisites to formulate the coregistration problem as the following linear equation system

\left[ \frac{\partial PET}{\partial t_x} \quad \frac{\partial PET}{\partial t_y} \quad \frac{\partial PET}{\partial t_z} \quad \frac{\partial PET}{\partial r_x} \quad \frac{\partial PET}{\partial r_y} \quad \frac{\partial PET}{\partial r_z} \quad \mathbf{1} \quad MRI \right] q = PET   (9)

where t_x, t_y, t_z stand for x-, y- and z-translation, r_x, r_y and r_z for pitch, roll and yaw, PET is the real PET image, \mathbf{1} is a column vector of ones and MRI denotes the emulated PET image. Note that all images (e.g. \partial PET / \partial t_x or PET) are represented as column vectors. Equation 9 assumes that the observed PET can be described by a linear combination of the emulated PET image and the additive effects of the six transformation parameters applied to PET. The first six column vectors of the left hand side (the numerical partial derivatives of the PET image with respect to the six transformation parameters) are known: small transformations are applied to the PET image and the change is measured. The solution describing how the MR image has to be moved to achieve a good coregistration of both images then consists of the first six elements of the vector q. This step (solving for q and applying the determined parameters to the emulated PET image) is iterated a fixed number of times to find the best transformation matrix T (in the current implementation of SPM95 this number is 16). The last iteration step returns the final T in terms of the first six components of q.
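A single iteration of this scheme amounts to an ordinary least-squares solve of equation 9. The example below is our own one-dimensional toy, not SPM code: the 'image' is a smooth 1-D profile, the only transformation parameter is a translation, and the partial derivative is computed numerically, exactly as described above. Iterating the solve a fixed number of times recovers the true shift.

```python
import numpy as np

def shift(image, t):
    """Translate a 1-D 'image' by t samples (linear interpolation)."""
    x = np.arange(image.size)
    return np.interp(x - t, x, image)

def estimate_shift(pet, emulated, delta=0.5):
    """One linearized iteration (cf. equation 9): solve
    [dPET/dt  1  emulated] q = pet in a least-squares sense and return
    the translation estimate q[0]."""
    deriv = (shift(emulated, delta) - emulated) / delta  # numerical partial derivative
    A = np.column_stack([deriv, np.ones_like(emulated), emulated])
    q, *_ = np.linalg.lstsq(A, pet, rcond=None)
    return q[0]

# A smooth profile (the smoothing in the text is what justifies the
# first-order Taylor approximation).
x = np.linspace(0, 10, 200)
emulated = np.exp(-(x - 5.0) ** 2)
pet = shift(emulated, 3.0)      # ground-truth translation: 3 samples

# Iterate: apply the current estimate and re-solve, as SPM does a fixed
# number of times (16 in SPM95).
t = 0.0
for _ in range(16):
    t += estimate_shift(pet, shift(emulated, t))
# t now approximates the true shift of 3 samples.
```

The same structure carries over to 3-D with six parameter columns plus the confound columns; each iteration is linear, but the iteration as a whole finds a nonlinear solution.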
Theoretical Comparison

Clearly, the main difference from a theoretical point of view between the AIR and SPM algorithms is the optimization method applied for computing a transformation matrix T. Whereas the AIR method employs a nonlinear Newton-Raphson search to fix one parameter at a time, the SPM method solves a linear equation system to compute six transformation parameters in one step. Both algorithms have to iterate these steps to converge on a good coregistration solution. It is important to note that the SPM routine finds a solution in a nonlinear fashion, although a single iteration is only capable of finding a linear solution.

For different input images, the required computing time varies for both routines. The AIR routine is mainly dependent on the iterative optimization, whose convergence behaviour is variable (Woods et al., 1993). The SPM routine consists of three tasks: affine transformation, segmentation and iterative solving of the linear equation system. The required computing time of the third task depends only on the dimensionality of the images and the fixed number of iterations, and is thus rather predictable. The affine transformation and segmentation, however, use optimization routines which are based on a quality measure of the solution found, so that the required computing time varies for different input images. The required time for coregistering an MR and a PET image is five to ten minutes for the AIR routine (measured on a Sparc 20 workstation), whereas the SPM routine needs roughly 20 minutes for segmentation and five minutes for coregistration. However, note that these times are not directly comparable, because the time required for the AIR coregistration does not include the necessary scalp editing of the MR image. If this scalp editing is done manually, the overall time of an AIR coregistration depends on the skill of the user. A reasonably skilled user is able to manually edit an MR image in less than an hour.
There are also third party software packages available which can be used to edit MR images in a semi-automatic fashion, so that the required time is reduced further.

Crossvalidation

In this section, we describe how we crossvalidated the AIR and the SPM routine. Basically, the crossvalidation consists of three steps:

  • Generate simulated PET images
  • Run the coregistration routines
  • Measure the error

To generate simulated PET images, we applied a series of transformations to six MR images. This process is not exactly the same as that used in the SPM routine to generate an emulated PET image. We will describe the rationale for not choosing the SPM routine for generation of simulated PET images in the discussion.

The T1-weighted MR images were generated on a 1 T scanner by an RF spoiled volume acquisition with TR = 21 ms, TE = 6 ms, flip angle = 35 degrees. The data were acquired as a 256 x 192 matrix. 140 contiguous axial slices with a thickness of 1.3 mm were obtained, which were interpolated to generate 182 slices. The field of view was 25 cm, resulting in voxel dimensions of 1 x 1 x 1 mm. The simulated PET images, which were generated by the sequence described below, had a dimension of 128 x 128 x 40 voxels and a voxel size of 2.05 x 2.05 x 3.43 mm, so that the whole brain including the cerebellum was covered by the simulated PET scans.

The MR image was scalp-edited and segmented by interactive thresholding into grey matter, white matter and CSF components. The grey matter, white matter and CSF intensities were modified in such a way that the mean of the white matter voxels related to the mean of the grey matter voxels by a ratio of 1:3. The corresponding ratio of CSF to grey matter was 1:10. A rigid body transformation Torig, which simulated misregistration between MR and PET data, was applied and the resulting image was resampled to the PET voxel size. Torig was saved for future reference. A trilinear interpolation was used to implement the sampling.
The image was smoothed in each direction to a resolution of 7 mm FWHM by applying a three-dimensional Gaussian filter. Gaussian white noise was added at 30% of the mean intensity value of brain voxels to simulate the worse signal to noise ratio (SNR) of the PET image. The image was then smoothed to PET resolution (approximately 8 mm FWHM) by another three-dimensional Gaussian filter with a FWHM of 4 mm in each direction. This particular order of filtering and noise addition was applied to visually imitate the typical PET intracranial texture. Figure 1 shows a slice of such a transformed simulated PET image. Comparing it to approximately the same slice in the corresponding real PET of the same subject shows that the simulated PET visually resembles the real PET.

Figure 1 about here.

This process was repeated 32 times for each MR image to generate a series of simulated PET images. In this way, a data set consisting of six MR images and 192 simulated PET images was generated. Since the original transformation matrices Torig are known for all simulated PET images, the error of the estimated transformation matrix T can be computed by Terr = Torig T^{-1}, and the performance of the coregistration routines in terms of transformation parameters can be compared. To extract the errors of the six rigid body transformation parameters t_x, t_y, t_z, r_x, r_y and r_z from Terr, the inverses of the calculations described in equations 3 to 6 have to be applied. The parameter errors are given by

error(t_x) = Terr(1, 4)
error(t_y) = Terr(2, 4)
error(t_z) = Terr(3, 4)
error(r_y) = \sin^{-1}(Terr(1, 3))
error(r_z) = \sin^{-1}(Terr(1, 2) / \cos(r_y))
error(r_x) = \sin^{-1}(Terr(2, 3) / \cos(r_y))   (10)

where the order of rotation is defined as yaw, roll and pitch (r_z, r_y, r_x). We tested the AIR routine and two options of the SPM routine, where one option was to use scalp-edited MR images and the second was to use non scalp-edited MR images. As a result, we produced 6 x 32 x 4 = 768 transformation matrices.
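The error extraction of equation 10 can be checked numerically: build a transformation from known parameters, compare it against a (here, deliberately null) estimate, and read the parameter errors back out of Terr = Torig T^{-1}. The helper names below are our own; the composition order T = T_t T_{r_x} T_{r_y} T_{r_z} is the SPM convention stated earlier, and the 1-based indices of the text become 0-based in the code.

```python
import numpy as np

def rot_x(r):
    c, s = np.cos(r), np.sin(r)
    return np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1.0]])

def rot_y(r):
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(r):
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def trans(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def rigid(tx, ty, tz, rx, ry, rz):
    """SPM-order rigid body matrix: T = Tt Trx Try Trz."""
    return trans(tx, ty, tz) @ rot_x(rx) @ rot_y(ry) @ rot_z(rz)

def parameter_errors(T_err):
    """Equation 10: recover the six parameter errors from T_err."""
    ry = np.arcsin(T_err[0, 2])
    return np.array([
        T_err[0, 3], T_err[1, 3], T_err[2, 3],   # tx, ty, tz
        np.arcsin(T_err[1, 2] / np.cos(ry)),     # pitch rx
        ry,                                      # roll ry
        np.arcsin(T_err[0, 1] / np.cos(ry)),     # yaw rz
    ])

# Known misregistration vs. a null estimate (identity transformation):
T_orig = rigid(2.0, -3.0, 1.5, 0.02, -0.01, 0.03)
T_est = np.eye(4)
T_err = T_orig @ np.linalg.inv(T_est)
errors = parameter_errors(T_err)   # recovers (2.0, -3.0, 1.5, 0.02, -0.01, 0.03)
```

A perfect estimate (T_est = T_orig) gives T_err = identity and all six errors zero.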
To examine various aspects of the coregistration, we used three types of transformation matrices Torig: the first simulated typical coregistration mismatches by using random values. In the other two types, only one transformation parameter was varied, the z-translation t_z or the pitch r_x. These were chosen because estimating t_z and r_x is normally the most difficult task of the coregistration routines. Estimating t_x, t_y, r_y and r_z is in general easier because of the symmetry of the brain. Changing only one parameter while holding the remaining parameters constant can reveal some information about the influence of this parameter on the overall error. Two MR images were assigned to each of these three manipulations or groups. The random translations of the first group had a Gaussian distribution with N(0, 5) [mm] and the random rotations were also Gaussian with N(0, 3) [degrees]. In the second group, we only changed the z-translations t_z of the simulated PET images, between 0 mm and 62 mm with an increase of 2 mm per image. For the simulated PET images of the third group, we changed the pitch r_x between 0 and 15.5 degrees in 0.5 degree steps. Since we generated a number of simulated PET images from each of the six MR images, we implicitly assumed that the variability of coregistrations between each MR image and its derived simulated PET images was the same as that between individual MR-PET image pairs.

Results

The results of all coregistration runs are shown in tables 1 to 6. In each of these tables, the mean and the standard deviation of the errors of the coregistration routines with respect to the six transformation parameters are displayed for one group of simulated PET images, so that each of the six original MR images is represented by a separate table. Since each of these six groups comprises 32 PET images, the standard deviations displayed in the tables have 31 degrees of freedom.
Tables 1 and 2 show the results of the groups in which random transformations were applied to the simulated PET images. Tables 3 and 4 refer to the groups of z-translated simulated PET images and tables 5 and 6 to the groups of x-rotated (pitch) simulated PET images.

Table 1 about here.
Table 2 about here.
Table 3 about here.
Table 4 about here.
Table 5 about here.
Table 6 about here.

For illustration we chose one group (corresponding to table 6) and display in figure 2 the translational and rotational errors of both routines. The first row of the figure displays the translational errors [mm] as functions of the three translation parameters t_x, t_y and t_z, and the second row shows the rotational errors r_x, r_y and r_z [degrees] for all axes.

Figure 2 about here.

The individual performance of both routines can be summarized as follows:

SPM routine with scalp-edited MR images: In figure 2, the results of the SPM routine with prior scalp editing are displayed as a solid line. From tables 1 to 6, one can see that the mean error of translation in x-, y- and z-direction is below one millimeter except in three instances (z-translation in tables 1, 2 and 3). The standard deviations of the translation errors are small throughout all the simulated PET data sets. Regarding the estimation of the rotations, the mean error of the pitch, roll and yaw estimates is always smaller than 0.60 degree and often close to zero.

SPM routine with non scalp-edited MR images: The results of the SPM routine without prior scalp editing are displayed as a dotted line in figure 2. Except for one instance (table 2), the mean errors of translation and rotation are larger if the input of the coregistration routine is a non scalp-edited MR image. This is most obvious in table 6, where the mean errors of the y- and z-translations are 2.16 and 2.91 mm respectively, compared to 0.02 and 0.62 mm (SPM routine with scalp-edited MRIs). However, in most cases the increase of mean error (scalp-edited vs.
non scalp-edited MRI) is at most around one millimeter in translation and 0.3 degree in rotation.

AIR routine: The results of the AIR routine are displayed as a dashed line in figure 2. The mean translation errors are always smaller than 1 millimeter except in one instance (z-translation in table 1) and have a small standard deviation. The mean rotation errors are close to zero degrees in all experiments; the maximal error is 0.40 degree (table 1, pitch estimate).

One can also see from figure 2 that the error of both routines in terms of the six parameters is independent of the actual x-rotation, even if it is as high as a pitch of 15.5 degrees. This independence of the error from the actual displacements also holds for the other five groups.

On a Sparc 20 computer, the time required for the AIR coregistration is roughly five to ten minutes. The SPM routine (including non-interactive segmentation of the MR image) needs roughly 20 minutes for segmentation and 5 minutes for coregistration. These times are not directly comparable without considering the additional time required for scalp editing the MR image, which is a necessary preprocessing step for the AIR routine. See the section Theoretical Comparison for an estimate of the time required to scalp edit an MR image.

Discussion

The basic result of our crossvalidation is that the AIR and SPM coregistration routines perform well on our simulated PET data set. The maximal mean z-translation errors made by the AIR routine and by the SPM routine are 1.44 mm and 1.67 mm respectively. The maximal mean rotation error is smaller than 0.40 degree for the AIR routine and 0.60 degree for the SPM routine. The performance of the SPM routine decreases if the MR image is not scalp-edited: the maximal mean z-translation error is then 2.91 mm and the maximal mean rotation error is 1.16 degrees. Both routines provide accurate solutions to the coregistration problem on scalp-edited MR images.
Moreover, the small standard deviations of the coregistration parameters suggest that the routines are robust. For most applications, the mean error made by the SPM routine on non scalp-edited MR images is still acceptable. However, if optimal accuracy is needed, manual or semi-automatic scalp-editing of the MR image is recommended when using the SPM routine.

The correct interpretation of these results demands some further understanding of the validation process. First of all, one has to accept that our simulated PET data are a good approximation to reality. It seems appropriate to simulate the relative voxel intensities of the PET by the ratio 1:3:10 (CSF : white matter : grey matter), scale the image to PET voxel size, smooth, add some noise, and finally smooth the resulting image to PET resolution. After these steps, the simulated PET images bear a strong resemblance to real PET images, as can be seen in figure 1. It could be argued that any deviation from the white : grey matter (3:10) or CSF : grey matter (1:10) ratio might decrease the performance of the SPM routine, since the emulated PET image is generated with these ratios. However, this is not a real limitation on the accuracy of the SPM routine. As one can see from equation 9, the polynomial fit between the emulated PET on the left side and the simulated PET on the right side of the equation will compensate for most types of deviation from the assumption about the ratios, particularly for a linear scaling of the ratio. Moreover, in our opinion, the most important image features for a thorough test of MRI to PET coregistration routines are the outer brain boundaries and the boundaries between grey and white matter, so that the exact ratio between the three partitions plays only a minor role.
Both routines rely inherently on these boundaries. The AIR routine segments the MR image into K voxel intensity partitions (see above) and tries to find a transformation which maps similar PET voxel intensities into each partition. Assuming that the similarity assumption holds for the PET and MR image, it is clear that a suboptimal transformation is 'punished' by the cost function, because the weighted sum of standard deviations \sigma_j will increase. This effect works best if the difference in voxel intensity between roughly homogeneous regions (grey matter, white matter and non-brain) is large and the transition between areas is sharp. Although the intrinsic smoothness and partial volume effects of a PET image render the boundaries less sharp, this feature enables the coregistration process to more reliably find the global minimum within the parameter space. The dependence of the SPM coregistration on image boundaries is also quite obvious: to find a set of coregistration parameters T, the partial derivatives \partial PET / \partial T of the PET image with respect to the transformation parameters (equation 9) are used. As a result, the image boundaries and the relative voxel intensities of different brain tissues are important image features. The amount of added noise does not seem to play an important role; both routines are robust for various noise levels.

Assuming that our simulated PET data are suitable for testing coregistration routines, we can consider the rationale for using a manual segmentation to generate the simulated PET images. As described above, SPM uses its own non-interactive segmentation routine to generate an emulated PET image, whereas AIR segments the MR image into different intensity partitions. We did not choose the same non-interactive segmentation of SPM for generating the simulated PET images, but chose a manual one, since we wanted to carry out a validation of the whole SPM routine, including its preprocessing segmentation step.
Moreover, the crossvalidation would probably have been biased in favour of SPM if we had used the same method to create the simulated and emulated PET images.

Concerning scalp-edited MR images, the performance of the AIR and SPM routines seems comparable. This shows that both routines reliably solve the coregistration problem if the MR image is scalp-edited. It also indicates that the segmentation process of the SPM routine works accurately on scalp-edited MR images. As stated above, the performance of the SPM routine worsens if non scalp-edited MR images are coregistered. Visual inspection of the MR images shows that the internal segmentation of the SPM routine misclassifies a small part of the meninges as grey matter. This most likely causes the observed decrease in performance compared to coregistration based on scalp-edited MR images.

From figure 2, one can also see that at least some of the parameters are highly dependent on each other. This is because the primary aim of both coregistration routines is to match the outer brain boundaries. In the figure, it can be seen that the SPM routine based on a non scalp-edited MR image makes relatively large errors on the y- and z-translation and the pitch. This combination of a positive pitch, a positive y-translation and a negative z-translation error is a suitable parameter combination to preserve the matching of the outer brain boundaries, although the overall error is relatively large.

Conclusion

In this paper, we showed that the routines examined reliably solve the MRI to PET coregistration problem for simulated PET data. Although the study is limited in the sense that we used simulated PET data, this work gives the functional imaging community a sense of the coregistration errors that should be expected using the two methods reviewed and provides a crossvalidation of these algorithms and their implementations.

Acknowledgments

The work in this paper is supported by an EU grant (Biomed I: CT94/1261).
JBP is funded by an EU grant, Human Capital and Mobility. JA and KJF are funded by the Wellcome Trust. We thank Cornelius Weiller and Richard Frackowiak for their help and support. We also thank Michael Krams for contributing the MR images used above.

References

Andersson, J. L., Sundin, A., and Valind, S. 1995. A method for coregistration of PET and MR brain images. Journal of Nuclear Medicine, 36:1307-15.

Ardekani, B., Braun, M., Hutton, B., Kanno, I., and Iida, H. 1995. A fully automatic multimodality image registration algorithm. Journal of Computer Assisted Tomography, 19:615-23.

Evans, A., Collins, D., Neelin, P., MacDonald, D., Kamber, M., and Marrett, S. 1994. Three dimensional correlative imaging: Application in human brain mapping. In Thatcher, R., Hallett, M., Zeffiro, T., John, E., and Huerta, M., editors, Advances in Neuroimaging: Multimodal Registration, pages 145-162.

Evans, A., Marret, S., Torrescorzo, J., Ku, S., and Collins, L. 1991. MRI-PET correlation in three dimensions using a volume-of-interest (VOI) atlas. Journal of Cerebral Blood Flow and Metabolism, 11:A69-78.

Friston, K. J., Ashburner, J., Frith, C. D., Poline, J.-B., Heather, J. D., and Frackowiak, R. S. 1995. Spatial Registration and Normalization of Images. Human Brain Mapping, 2:165-89.

Friston, K. J., Ashburner, J., Holmes, A., Poline, J.-B., Buchel, C., Price, C. J., Turner, R., Howseman, A., Rees, G. E., Greene, J. D., and Josephs, O. 1996. SPM short course. Available at http://www.fil.ion.ucl.ac.uk.

Ge, Y., Fitzpatrick, J., Votaw, J., Gadamsetty, S., Maciunas, R., Kessler, R., and Margolin, R. 1994. Retrospective registration of PET and MR brain images: an algorithm and its stereotactic validation. Journal of Computer Assisted Tomography, 18:800-810.

Mangin, J., Frouin, V., Bloch, I., Bendriem, B., and Lopez-Krahe, J. 1994. Fast nonsupervised 3D registration of PET and MR images of the brain.
Journalof Cerebral Blood Flow and Metabolism, 14:749{62.Pelizzari, C., Evans, A., Neelin, P., Chen, C.-T., and Marret, S. 1991. Comparisonof two methods for three-dimensional registration of PET and MR images.Proc IEEE-EMBS.20 Pierre, D. A. 1986. Optimization Theory with Applications. Dover Publications,Inc., New York.Strother, S. C., Anderson, J. R., Xu, X.-L., Liow, J.-S., Bonar, D. C., and Rotten-berg, D. R. 1994. Quantitative Comparisons of Image Registration TechniquesBased on High-Resolution MRI of the Brain. Journal of Computer AssistedTomography, 18:954{62.Turkington, T., Ho man, J., Jaszczak, R., MacFall, J., Harris, C., Kilts, C.,Pelizzari, C., and Coleman, R. 1995. Accuracy of surface t registration forPET and MR brain images using full and incomplete brain surfaces. Journalof Computer Assisted Tomography, 19:117{24.Woods, R. P., Cherry, S. R., and Mazziotta, J. C. 1992. Rapid automated al-gorithm for aligning and reslicing pet images. Journal of Computer AssistedTomography, 16:620{33.Woods, R. P., Mazziotta, J. C., and Cherry, S. R. 1993. MRI-PET Registra-tion with Automated Algorithm. Journal of Computer Assisted Tomography,17:536{46.21 Table 1: group random transformations 1: translational [mm] and rotational [de-grees] mean errors for 32 simulated PET images, to which random transformations(translations N(0; 5) [mm], rotations N(0; 3) [degrees]) were appliedSPM withSPM withoutAIRscalp editingscalp editingmean std.dev. mean std.dev. mean std.dev.x-transl. -0.012 0.145 0.670 0.236 0.039 0.220y-transl. -0.422 0.175 -1.150 0.172 0.631 0.259z-transl. 
1.478 0.297 2.071 0.295 -1.437 0.370pitch-0.599 0.086 -1.131 0.082 0.401 0.093roll0.024 0.091 -0.145 0.151 0.044 0.037yaw0.120 0.060 -0.260 0.078 -0.012 0.02922 Table 2: group random transformations 2: translational [mm] and rotational [de-grees] mean errors for 32 simulated PET images, to which random transformations(translations N(0; 5) [mm], rotations N(0; 3) [degrees]) were appliedSPM withSPM withoutAIRscalp editingscalp editingmean std.dev. mean std.dev. mean std.dev.x-transl. 0.246 0.179 0.125 0.289 0.029 0.283y-transl. -0.538 0.232 0.368 0.205 0.457 0.261z-transl. 1.668 0.386 1.265 0.365 -0.973 0.421pitch-0.530 0.160 -0.345 0.154 0.353 0.119roll0.054 0.100 0.051 0.196 0.026 0.055yaw0.020 0.051 -0.012 0.069 0.009 0.03923 Table 3: group z-translation 1: translational [mm] and rotational [degrees] meanerrors for 32 simulated PET images, to which translations in z{direction (rangingfrom 0 to 62 mm) were appliedSPM withSPM withoutAIRscalp editingscalp editingmean std.dev. mean std.dev. mean std.dev.x-transl. 0.167 0.080 -1.436 0.110 -0.072 0.061y-transl. -0.353 0.121 1.324 0.242 0.426 0.064z-transl. 1.132 0.254 -0.028 0.374 -0.872 0.106pitch-0.235 0.073 0.462 0.133 0.296 0.048roll-0.076 0.065 -0.469 0.080 0.026 0.052yaw-0.033 0.038 0.667 0.058 0.035 0.02424 Table 4: group z-translation 2: translational [mm] and rotational [degrees] meanerrors for 32 simulated PET images, to which translations in z{direction (rangingfrom 0 to 62 mm) were appliedSPM withSPM withoutAIRscalp editingscalp editingmean std.dev. mean std.dev. mean std.dev.x-transl. 0.247 0.078 0.462 0.107 0.049 0.052y-transl. -0.153 0.094 0.788 0.142 0.325 0.070z-transl. 
0.692 0.171 -0.708 0.305 -0.824 0.120pitch-0.161 0.053 0.475 0.091 0.241 0.043roll-0.014 0.034 -0.295 0.056 0.060 0.022yaw-0.027 0.038 -0.293 0.041 0.007 0.02125 Table 5: group pitch 1: translational [mm] and rotational [degrees] mean errorsfor 32 simulated PET images, to which rotations around the x{axis (ranging from0 to 15.5 degrees) were appliedSPM withSPM withoutAIRscalp editingscalp editingmean std.dev. mean std.dev. mean std.dev.x-transl. 0.236 0.097 0.310 0.102 0.021 0.058y-transl. 0.105 0.131 0.506 0.127 0.653 0.226z-transl. 0.165 0.199 -1.267 0.243 -0.585 0.109pitch-0.068 0.067 -0.006 0.085 0.097 0.080roll0.084 0.057 0.023 0.074 -0.033 0.029yaw0.039 0.048 0.051 0.058 -0.044 0.02326 Table 6: group pitch 2: translational [mm] and rotational [degrees] mean errorsfor 32 simulated PET images, to which rotations around the x{axis (ranging from0 to 15.5 degrees) were appliedSPM withSPM withoutAIRscalp editingscalp editingmean std.dev. mean std.dev. mean std.dev.x-transl. 0.047 0.118 0.548 0.116 0.037 0.058y-transl. -0.017 0.124 2.161 0.199 0.719 0.289z-transl. 0.617 0.225 -2.910 0.319 -0.869 0.073pitch-0.280 0.069 1.162 0.111 0.107 0.033roll0.055 0.060 0.045 0.081 0.053 0.026yaw0.106 0.059 -0.155 0.061 -0.010 0.03027 Figure 1:Visual comparison of a simulated PET with a real PET slice of the same subject(left) real pet (right) simulated PETFigure 2:group pitch 2: translational [mm] and rotational [degrees] errors for 32 simu-lated PET images, to which rotations around the x{axis (ranging from 0 to 15.5degrees) were applied (dashed line) AIR-routine (solid line) SPM -routine withscalp-edited MR images (dotted line) SPM -routine with non scalp-edited images28
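The parameter coupling noted in the discussion (a positive pitch error paired with a positive y-translation and a negative z-translation error) can be illustrated numerically: a rotation about a pivot offset from the brain centre is exactly a rotation about the centre plus a translation, so these parameters can trade off while the outer-boundary match is preserved. The following is a minimal sketch, not part of the original study; the pivot location (40 mm anterior, 60 mm superior to the brain centre) and the spherical brain-boundary model are illustrative assumptions.

```python
import math

def rot_x(theta, p):
    """Rotate point p = (x, y, z) about the x-axis by theta radians (a pitch)."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (x, y * c - z * s, y * s + z * c)

def add(a, b):
    return tuple(u + v for u, v in zip(a, b))

def sub(a, b):
    return tuple(u - v for u, v in zip(a, b))

theta = math.radians(1.0)       # 1-degree pitch error
pivot = (0.0, 40.0, 60.0)       # hypothetical pivot offset from brain centre [mm]

# Rotating about `pivot` equals rotating about the brain centre (origin)
# followed by the induced translation t = pivot - R(pivot).
t = sub(pivot, rot_x(theta, pivot))

# Sample points on an 80 mm sphere as a crude model of the outer brain boundary.
boundary = [(80 * math.sin(a) * math.cos(b),
             80 * math.sin(a) * math.sin(b),
             80 * math.cos(a))
            for a in (0.5, 1.5, 2.5) for b in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0)]

# Verify that the two formulations move every boundary point identically,
# i.e. pitch and y-/z-translation errors are interchangeable for boundary matching.
max_diff = max(
    max(abs(u - v) for u, v in zip(
        add(rot_x(theta, sub(p, pivot)), pivot),   # pitch about the off-centre pivot
        add(rot_x(theta, p), t)))                  # pitch about centre + coupled translation
    for p in boundary)

print(t)        # induced (x, y, z) translation: y positive, z negative
print(max_diff) # ~0: the parameter combinations are equivalent at the boundary
```

With these assumed numbers, a 1-degree pitch about the off-centre pivot induces a translation of roughly +1.05 mm in y and -0.69 mm in z: the same sign pattern as the coupled errors observed for the non-scalp-edited SPM coregistration in table 6.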